3.7 Minimax Estimation

1 Definition

The idea is to minimize the worst-case risk: $\min_\delta \sup_\theta R(\theta; \delta)$.

Minimax Risk

The minimum achievable sup-risk is called the minimax risk of the estimation problem: $r^* = \inf_\delta \sup_\theta R(\theta; \delta)$.
An estimator $\delta^*$ is called minimax if it achieves the minimax risk: $r^* = \sup_\theta R(\theta; \delta^*)$.
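As a concrete illustration (my own numeric sketch, not from the notes): take $X \sim N(\theta, 1)$ with squared-error loss and $\theta$ restricted to a bounded interval $[-m, m]$. We can compare the sup-risk of the unbiased rule $\delta(X) = X$ with that of a linear shrinkage rule $cX$, whose risks have closed forms:

```python
# Sketch (illustrative setup, not from the notes): X ~ N(theta, 1),
# squared-error loss, theta restricted to [-m, m].
# Closed-form risks:
#   R(theta; X)  = 1                                (delta(X) = X)
#   R(theta; cX) = c**2 + (1 - c)**2 * theta**2     (linear shrinkage)

def sup_risk(risk, thetas):
    """Approximate sup_theta R(theta; delta) by a maximum over a grid."""
    return max(risk(t) for t in thetas)

m = 2.0
thetas = [-m + 2 * m * i / 400 for i in range(401)]   # grid on [-m, m]

r_unbiased = sup_risk(lambda t: 1.0, thetas)

c = m**2 / (1 + m**2)   # best shrinkage level among the linear rules cX
r_shrunk = sup_risk(lambda t: c**2 + (1 - c)**2 * t**2, thetas)

print(r_unbiased)   # 1.0
print(r_shrunk)     # m^2 / (1 + m^2) = 0.8 < 1: smaller worst-case risk
```

With a bounded parameter space, shrinking toward 0 lowers the worst-case risk; with $\theta$ unbounded, the shrinkage rule's sup-risk is infinite, and (as the prior-sequence argument later shows) $\delta(X) = X$ is minimax.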

Game theory interpretation:

Nature chooses the parameter $\theta$ adversarially, not the data $X$ (which still follow the model).

Compare to the Bayes estimator (a "maximin" game):

  1. Nature first chooses a prior $\Lambda$ (trying to make the Bayes risk large).
  2. The analyst then chooses the estimator that minimizes the average risk under $\Lambda$, i.e. the Bayes estimator $\delta_\Lambda$.

2 Least Favorable Priors

The key observation is: average-case risk $\le$ worst-case risk. So for any proper prior $\Lambda$:
$$r_\Lambda = \underbrace{\inf_\delta \int R(\theta; \delta)\, d\Lambda(\theta)}_{\text{Bayes}} \le \underbrace{\inf_\delta \sup_\theta R(\theta; \delta)}_{\text{minimax}} = r^*.$$
A prior $\Lambda^*$ is least favorable if it gives the best lower bound, i.e. $r_{\Lambda^*} = \sup_\Lambda r_\Lambda$.
For any $\Lambda$ and any $\delta$: $r_\Lambda \le r_{\Lambda^*} \le r^* \le \sup_\theta R(\theta; \delta)$.
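This sandwich can be exercised numerically. The sketch below (my own example, assuming squared-error loss) brackets the minimax risk for estimating $p$ from $X \sim \mathrm{Binomial}(n, p)$: the Bayes risk of the uniform prior from below, and the sup-risk of the MLE $X/n$ from above:

```python
import math

# Sketch (my own numeric example, not from the notes): bracket the minimax
# risk r* for estimating p from X ~ Binomial(n, p) under squared-error loss.
# Lower bound: the Bayes risk r_Lambda of the uniform prior on [0, 1].
# Upper bound: the sup-risk of the MLE delta(X) = X/n, namely 1/(4n).

n = 10

def risk(p, delta):
    """Exact risk R(p; delta) = E_p[(delta(X) - p)^2], X ~ Binomial(n, p)."""
    return sum(math.comb(n, x) * p**x * (1 - p)**(n - x) * (delta(x) - p)**2
               for x in range(n + 1))

def bayes_rule(x):
    # Posterior mean under the uniform (Beta(1, 1)) prior.
    return (x + 1) / (n + 2)

# Bayes risk: integrate the Bayes rule's risk against the prior (trapezoid).
grid = [i / 2000 for i in range(2001)]
vals = [risk(p, bayes_rule) for p in grid]
r_lower = sum((vals[i] + vals[i + 1]) / 2 * (grid[i + 1] - grid[i])
              for i in range(2000))

r_upper = 1 / (4 * n)   # sup_p p(1 - p)/n, attained at p = 1/2

print(r_lower, r_upper)   # r_lower <= r* <= r_upper
```

For $n = 10$ this gives roughly $0.0139 \le r^* \le 0.025$; the true minimax risk for this problem, $1/(4(\sqrt{n} + 1)^2) \approx 0.0144$, indeed falls in between.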

Theorem

Suppose $\Lambda$ is a prior with Bayes estimator $\delta_\Lambda$. If $r_\Lambda = \sup_\theta R(\theta; \delta_\Lambda)$, then

  1. $\delta_\Lambda$ is minimax.
  2. If $\delta_\Lambda$ is the unique Bayes estimator for $\Lambda$ (up to almost-sure equality), it is the unique minimax estimator.
  3. $\Lambda$ is least favorable.

So when does the average risk equal the sup risk?
This is true if $\delta_\Lambda$ has constant risk, or more generally if $\Lambda$ puts all of its mass on the set of $\theta$ where $R(\theta; \delta_\Lambda)$ attains its supremum.
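One classic case where the average risk equals the sup risk is a Bayes estimator with constant risk. The sketch below checks this numerically for the standard binomial textbook example (setup assumed, not derived in these notes): under the $\mathrm{Beta}(\sqrt{n}/2, \sqrt{n}/2)$ prior, the Bayes estimator's risk does not depend on $p$, so by the theorem it is minimax.

```python
import math

# Sketch of the classic constant-risk binomial example (assumed setup):
# X ~ Binomial(n, p), squared-error loss. Under the
# Beta(sqrt(n)/2, sqrt(n)/2) prior the Bayes estimator is
#     delta(x) = (x + sqrt(n)/2) / (n + sqrt(n)),
# whose risk is constant in p, hence minimax by the theorem.

n = 10
s = math.sqrt(n)

def delta(x):
    return (x + s / 2) / (n + s)

def risk(p):
    """Exact risk E_p[(delta(X) - p)^2] for X ~ Binomial(n, p)."""
    return sum(math.comb(n, x) * p**x * (1 - p)**(n - x) * (delta(x) - p)**2
               for x in range(n + 1))

risks = [risk(p) for p in (0.01, 0.2, 0.5, 0.8, 0.99)]
print(risks)   # all equal: 1 / (4 * (sqrt(n) + 1)**2)
```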

2.1 Least Favorable Prior Sequence

Sometimes there is no least favorable prior, e.g. $X \sim N(\theta, 1)$ with $\theta \in \mathbb{R}$.

Least Favorable Prior Sequence

A sequence $\Lambda_1, \Lambda_2, \dots$ is least favorable if $r_{\Lambda_n} \to \sup_\Lambda r_\Lambda$.

Theorem

Suppose $\Lambda_1, \Lambda_2, \dots$ is a prior sequence, and $\delta$ satisfies $\sup_\theta R(\theta; \delta) = \lim_{n \to \infty} r_{\Lambda_n}$. Then

  1. $\delta$ is minimax.
  2. $\Lambda_1, \Lambda_2, \dots$ is least favorable.
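For the $X \sim N(\theta, 1)$ example, the priors $\Lambda_n = N(0, n)$ form such a sequence (a standard calculation, only sketched here): the Bayes estimator under $\Lambda_n$ is $\frac{n}{n+1} X$, with Bayes risk $r_{\Lambda_n} = \frac{n}{n+1} \to 1 = \sup_\theta R(\theta; X)$, so $\delta(X) = X$ is minimax.

```python
# Bayes risks r_{Lambda_n} under Lambda_n = N(0, n) for X ~ N(theta, 1):
# the posterior is N(n/(n+1) * X, n/(n+1)), so the Bayes risk equals the
# (constant) posterior variance n / (n + 1).
bayes_risks = [n / (n + 1) for n in (1, 10, 100, 1000)]
print(bayes_risks)   # increases toward 1 = sup-risk of delta(X) = X
```

Since the limit of the Bayes risks equals the sup-risk of $\delta(X) = X$, the theorem applies even though no single proper prior attains the bound.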

3 Bounding Minimax Risk

Our theorems give an idea of how to bound $r^*$ for a problem: any Bayes risk $r_\Lambda$ bounds it from below, and any sup-risk $\sup_\theta R(\theta; \delta)$ bounds it from above.

Minimax estimators are very hard to find, but minimax bounds are often used in statistical theory to characterize the hardness of a problem (especially lower bounds).

Examples

  • Propose a practical estimator $\delta$, then find a prior $\Lambda$ for which $r_\Lambda$ is close to $\sup_\theta R(\theta; \delta)$ (or matches its rate, or converges to it asymptotically). Conclude that $\delta$ can't be improved "much".
  • Quantify the hardness of a problem by its minimax rate in some asymptotic regime.
  • Caveat: a problem might be easy throughout most of the parameter space but very hard in some bizarre corner you never encounter in practice.